# English Text Generation
## Unireason Qwen3 14B RL I1 GGUF
**License:** Apache-2.0
UniReason-Qwen3-14B-RL is a quantized model applicable to multiple domains, especially proficient in text generation and mathematical reasoning tasks.
**Tags:** Large Language Model, Transformers, English
**Author:** mradermacher · **Downloads:** 302 · **Likes:** 1

## Orpheus 3b 0.1 Ft Q8 0 GGUF
**License:** Apache-2.0
This model is converted from canopylabs/orpheus-3b-0.1-ft into GGUF format, suitable for text generation tasks.
**Tags:** Large Language Model, English
**Author:** dodgeinmedia · **Downloads:** 22 · **Likes:** 0

## Gemma3 27b Abliterated Dpo
A large language model fine-tuned from mlabonne/gemma-3-27b-it-abliterated, trained with the Unsloth acceleration framework and Hugging Face's TRL library for 2x faster training.
**Tags:** Large Language Model, Transformers, English
**Author:** summykai · **Downloads:** 326 · **Likes:** 3

## Deepseek R1 Chinese Law
**License:** Apache-2.0
A Llama model trained with Unsloth and Hugging Face's TRL library, achieving 2x faster training.
**Tags:** Large Language Model, Transformers, English
**Author:** corn6 · **Downloads:** 74 · **Likes:** 2

## Travelbot
**License:** Apache-2.0
A Llama model trained with Unsloth and Hugging Face's TRL library, achieving 2x faster training.
**Tags:** Large Language Model, Transformers, English
**Author:** kitty528 · **Downloads:** 9,146 · **Likes:** 2

## RWKV7 Goose Pile 168M HF
**License:** Apache-2.0
An RWKV-7 model in flash-linear-attention format, trained on the Pile dataset and supporting English text generation tasks.
**Tags:** Large Language Model, Transformers, English
**Author:** RWKV · **Downloads:** 57 · **Likes:** 2

## RWKV7 Goose World3 1.5B HF
**License:** Apache-2.0
An RWKV-7 model in flash-linear-attention format, supporting English text generation tasks.
**Tags:** Large Language Model, English
**Author:** RWKV · **Downloads:** 70 · **Likes:** 2

## Llama 3.2 11B Vision Medical
**License:** Apache-2.0
A model fine-tuned from unsloth/Llama-3.2-11B-Vision-Instruct, trained with Unsloth and Hugging Face's TRL library for a 2x training speedup.
**Tags:** Text-to-Image, Transformers, English
**Author:** Varu96 · **Downloads:** 25 · **Likes:** 1

## Doge 320M
**License:** Apache-2.0
Doge is a sequence transformation model that employs dynamic masked attention and can perform state transitions using either multi-layer perceptrons or a cross-domain mixture of experts.
**Tags:** Large Language Model, Transformers, Multilingual
**Author:** SmallDoge · **Downloads:** 3,028 · **Likes:** 4

## Krx Qwen2 7b It X
**License:** Apache-2.0
An instruction-following model fine-tuned from unsloth/Qwen2-7B-Instruct, trained with the Unsloth and TRL libraries for 2x faster training.
**Tags:** Large Language Model, Transformers, Multilingual
**Author:** 2point5p · **Downloads:** 18 · **Likes:** 2

## Nanolm 1B Instruct V1.1
**License:** GPL-3.0
NanoLM-1B-Instruct-v1.1 is a small instruction-tuned language model with 1 billion parameters, supporting multi-domain English text generation tasks.
**Tags:** Large Language Model, Multilingual
**Author:** Mxode · **Downloads:** 24 · **Likes:** 1

## Llama3.1 8b Instruct Summarize Q4 K M
**License:** Apache-2.0
A 4-bit quantized version of Meta-Llama-3.1-8B-Instruct, trained with Unsloth and Hugging Face's TRL library for 2x faster training.
**Tags:** Large Language Model, English
**Author:** raaec · **Downloads:** 107 · **Likes:** 0

## Gemma 2 9B It SPPO Iter3
An 8.9-billion-parameter language model produced in the third iteration of self-play preference optimization, starting from google/gemma-2-9b-it and fine-tuned on the UltraFeedback dataset.
**Tags:** Large Language Model, Transformers, English
**Author:** UCLA-AGI · **Downloads:** 6,704 · **Likes:** 125

## Athena 70B L3 I1 GGUF
Athena-70B-L3 is a 70B-parameter large language model supporting English text generation tasks and employing parameter-efficient fine-tuning techniques.
**Tags:** Large Language Model, Transformers, English
**Author:** mradermacher · **Downloads:** 141 · **Likes:** 4

## Gemma 2 9b It
Gemma is a series of lightweight open large language models from Google, built on the same technology used to create the Gemini models and suitable for a variety of text generation tasks.
**Tags:** Large Language Model, Transformers
**Author:** google · **Downloads:** 336.05k · **Likes:** 705

## Chewy Lemon Cookie 11B GGUF
Chewy-Lemon-Cookie-11B is an 11B-parameter large language model based on the Mistral architecture, focused on text generation and role-playing tasks.
**Tags:** Large Language Model, English
**Author:** mradermacher · **Downloads:** 296 · **Likes:** 2

## Shotluck Holmes 1.5
**License:** Apache-2.0
An image-to-text generation model associated with the Shot2Story-20K dataset, capable of converting input images into coherent textual descriptions or stories.
**Tags:** Image-to-Text, Transformers, English
**Author:** RichardLuo · **Downloads:** 158 · **Likes:** 3

## Mythomax L2 13b Q4 K M GGUF
**License:** Other
A Q4_K_M-quantized GGUF build of MythoMax L2 13b, a large language model suited to text generation tasks.
**Tags:** Large Language Model, English
**Author:** Clevyby · **Downloads:** 1,716 · **Likes:** 2

## Meta Llama Meta Llama 3 8B Instruct 4bits
The instruction-tuned 8B-parameter version of Meta Llama 3, optimized for dialogue scenarios and demonstrating strong helpfulness and safety performance.
**Tags:** Large Language Model, Transformers
**Author:** RichardErkhov · **Downloads:** 28 · **Likes:** 1

## Zephyr Orpo 141b A35b V0.1 GGUF
**License:** Apache-2.0
A 141-billion-parameter Mixture of Experts (MoE) model fine-tuned from Mixtral-8x22B-v0.1, with 35 billion active parameters, primarily designed for English text generation tasks.
**Tags:** Large Language Model, English
**Author:** MaziyarPanahi · **Downloads:** 10.04k · **Likes:** 29

## Recurrentgemma 2b
RecurrentGemma is an open language model family from Google based on a novel recurrent architecture, offered in both pre-trained and instruction-tuned versions for a variety of text generation tasks.
**Tags:** Large Language Model, Transformers
**Author:** google · **Downloads:** 1,941 · **Likes:** 92

## Gemma 1.1 2b It
Gemma is a lightweight open model series from Google, built on the same technology as Gemini and suitable for a variety of text generation tasks.
**Tags:** Large Language Model, Transformers
**Author:** google · **Downloads:** 71.01k · **Likes:** 158

## Ministral 3b Instruct
**License:** Apache-2.0
Ministral is a small-scale language model series based on the Mistral architecture; this 3-billion-parameter model is primarily designed for English text generation tasks.
**Tags:** Large Language Model, Transformers, English
**Author:** ministral · **Downloads:** 15.89k · **Likes:** 53

## Gemma 7b Zephyr Sft
**License:** Other
A large language model based on Google's Gemma 7B, fine-tuned with the Zephyr SFT recipe, primarily for text generation tasks.
**Tags:** Large Language Model, Transformers
**Author:** wandb · **Downloads:** 19 · **Likes:** 2

## Litellama 460M 1T
**License:** MIT
LiteLlama is a streamlined open-source version of Meta AI's LLaMa 2, featuring only 460 million parameters and trained on 1 trillion tokens.
**Tags:** Large Language Model, Transformers, English
**Author:** ahxt · **Downloads:** 1,225 · **Likes:** 162

## Tinymistral 248M
**License:** Apache-2.0
A language model scaled down from Mistral 7B to 248 million parameters, designed for text generation tasks and well suited to downstream fine-tuning.
**Tags:** Large Language Model, Transformers, English
**Author:** Locutusque · **Downloads:** 1,127 · **Likes:** 46

## Mythalion 13B GGUF
Mythalion 13B is a 13B-parameter large language model from PygmalionAI, based on the Llama architecture and specializing in text generation and instruction-following tasks.
**Tags:** Large Language Model, English
**Author:** TheBloke · **Downloads:** 2,609 · **Likes:** 67

## Llama 2 7b Hf
Llama 2 is a 7-billion-parameter pre-trained generative text model developed by Meta, part of its open-source large language model series.
**Tags:** Large Language Model, Transformers, English
**Author:** meta-llama · **Downloads:** 914.57k · **Likes:** 2,038

## Cerebras GPT 111M
**License:** Apache-2.0
A 111M-parameter model in the Cerebras-GPT series with a GPT-3-style architecture, trained on The Pile dataset and compute-optimal per the Chinchilla scaling laws.
**Tags:** Large Language Model, Transformers, English
**Author:** cerebras · **Downloads:** 5,975 · **Likes:** 76

## Pythia 1b
**License:** Apache-2.0
Pythia-1B is the 1-billion-parameter member of EleutherAI's Pythia suite, a family of language models developed for interpretability research and trained on The Pile dataset.
**Tags:** Large Language Model, Transformers, English
**Author:** EleutherAI · **Downloads:** 79.69k · **Likes:** 38

## Pythia 12b
**License:** Apache-2.0
Pythia-12B is the largest model in EleutherAI's Pythia scaling suite, with 12 billion parameters, designed to advance scientific research on large language models.
**Tags:** Large Language Model, Transformers, English
**Author:** EleutherAI · **Downloads:** 9,938 · **Likes:** 136

## Comet Atomic En
An English event-reasoning model based on the T5 architecture, used to analyze the prerequisites, effects, intentions, and reactions of events.
**Tags:** Large Language Model, Transformers, English
**Author:** svjack · **Downloads:** 319 · **Likes:** 3

## Flan T5 Base Samsum
**License:** Apache-2.0
A text generation model based on Google's flan-t5-base, fine-tuned on the SAMSum dialogue summarization dataset and excelling at dialogue summarization tasks.
**Tags:** Large Language Model, Transformers, English
**Author:** achimoraites · **Downloads:** 15 · **Likes:** 3

## Pythia 6.9b
**License:** Apache-2.0
Pythia-6.9B is a large-scale language model developed by EleutherAI as part of the Pythia scaling suite, designed to facilitate interpretability research.
**Tags:** Large Language Model, Transformers, English
**Author:** EleutherAI · **Downloads:** 46.72k · **Likes:** 54

## Pythia 1b Deduped
**License:** Apache-2.0
Pythia-1B-deduped is a 1-billion-parameter Transformer language model developed by EleutherAI for interpretability research, trained on the deduplicated Pile dataset.
**Tags:** Large Language Model, Transformers, English
**Author:** EleutherAI · **Downloads:** 19.89k · **Likes:** 19

## Pythia 410m
**License:** Apache-2.0
Pythia is a series of causal language models developed by EleutherAI for interpretability research, spanning 8 model sizes from 70 million to 12 billion parameters with 154 training checkpoints each.
**Tags:** Large Language Model, Transformers, English
**Author:** EleutherAI · **Downloads:** 83.28k · **Likes:** 25

## Pythia 1.4b
**License:** Apache-2.0
Pythia-1.4B is a causal language model with 1.4 billion parameters (1.2B non-embedding) developed by EleutherAI, part of the Pythia scaling suite and designed for interpretability research.
**Tags:** Large Language Model, Transformers, English
**Author:** EleutherAI · **Downloads:** 60.98k · **Likes:** 23

## Opt 2.7b
**License:** Other
OPT is an open-source large language model series from Meta AI, with parameter scales ranging from 125 million to 175 billion, aimed at promoting open research on large-scale language models.
**Tags:** Large Language Model, English
**Author:** facebook · **Downloads:** 53.87k · **Likes:** 83

## Opt 1.3b
**License:** Other
OPT is an open-source large language model series from Meta AI, benchmarked against the GPT-3 architecture and aimed at promoting reproducibility and discussion of societal impact in large-model research.
**Tags:** Large Language Model, English
**Author:** facebook · **Downloads:** 196.07k · **Likes:** 168

## T5 Efficient Xl
**License:** Apache-2.0
T5-Efficient-XL is a deep-narrow variant of Google's T5 that improves downstream task performance by increasing model depth rather than width.
**Tags:** Large Language Model, English
**Author:** google · **Downloads:** 63 · **Likes:** 1